How does computer programming impact the act of designing? Are digital protocols about to replace human activity in the project of architecture, or does the human component remain an essential one - through changed paradigms? And if so, which paradigms should we refer to? Charles Driesler and Ahmad Tabbakh retrace these fundamental questions through some recent examples, and propose the application of a GAN protocol for the post-war reconstruction of Aleppo.
For the 2019 Shenzhen Biennale of Urbanism\Architecture (UABB), titled "Urban Interactions," (21 December 2019-8 March 2020) ArchDaily is working with the curators of the "Eyes of the City" section to stimulate a discussion on how new technologies might impact architecture and urban life. The contribution below is part of a series of scientific essays selected through the “Eyes of the City” call for papers, launched in preparation for the exhibition: international scholars were asked to send their reflections in reaction to the statement by the curators Carlo Ratti Associati, Politecnico di Torino and SCUT, which you can read here.
Protocological Architectures
The Posthuman Project
Design automation promises so much and has, for decades, simply failed to deliver. I just want my Make Building button, and I want it now! But, as an automation maximalist with most of my contributions taking the form of somewhat clever quips on Twitter, I place the blame on design’s aversion to objective success criteria. How can I win if I don’t know what that means? Last week, though, I blamed the impossibility of any objective criteria. I had just read that architecture is something you must be socialized into [1]. How could a machine learn all that? And the week before that I was sick, so I was blaming the flu.
In May, Brian Ringley shared a series of manipulated images on Instagram [2] that asked, “what will our automated world look like when we no longer design for the human body?” He took a handful of construction vehicle stock photos and repeated one move on each: remove the operator’s cabin and leave the headless rest behind (Fig. 1). I’m not sure who he blames for our failures, but it seems to be personal. The implicit position here rejects the presence of a human at all; not designing “for the human body” seems to mean without the existence of any human body. When asked about the potential shortcomings in this omission and his faith in the reliability of a mechanized system, Ringley dismissed the question and stated, “long term this wouldn’t be a reasonable concern.” But how far can we take this “long term” defense? If the machines are intelligent enough to be so trusted with deadly equipment, are they also smart enough to design that equipment? Are “we” still in the picture? I just want someone to blame, Brian.
The posthuman project sidesteps these questions by skipping to the end. Ringley constructs a delicious image of the future, but I want to know (1) how we get there and (2) where the people went. I would like to write the prequel to Ringley’s posthuman backhoes and propose a project that is actionable today: the preposthuman [3] story, if you will.
In this paper, I assume that we live in a world of humans interacting with machines. In this world, where architects are ultimately tasked with producing instructions for the manifestation of their intent, we passed an important threshold long before arriving at the questions Ringley asks today. The invention of computer programming languages, objectively actionable instructions, empowered architects to explicitly declare their intent. We may now define the relationships between the architect, machine, and otherwise as they design in concert. And with such a clearly defined dance, we can finally know who to blame. This is the story of how we abandoned the project of design automation in favor of design protocols that orchestrated the future.
Black Boxes
Ringley’s no-humans attitude builds on decades-earlier intentions for no-architects. In 1976, Cedric Price tackled the issue of automated machine design decisions in The Generator Project. Conceived as a self-critique of his Fun Palace, whose scaffolds let humans infinitely reconfigure the space, the Generator saw Price working with computer programmers to explicitly code its recombination logic [4]. Each occupant was assigned to inhabit an autonomous cube and would be allowed to control how it might move along a predefined grid. If left idle, however, the cubes could get “bored” and independently reconfigure themselves. Each space now had at least two stakeholders: the occupant and the space itself. Price, however, present at the invention of this built product, sought to remove himself from whatever form it might finally take. He deferred that responsibility (read: blame) to the occupant.
One year later, in 1977, Christopher Alexander took architectural abdication further with his publication of A Pattern Language. Here, the architects didn’t even get as far as a buildable product. They distilled their expertise to a collection of patterns and principles that could be used in the design of some space. Participants were called upon to improve this language in pursuit of “more true, more profound invariants” of what Alexander and his team self-described as their “current best guess as to what arrangement of the physical environment will work to solve the problem” of design [5]. I can’t help but read between the lines: their hands were clean of whatever mess their occupants cobbled together.
Ringley’s deference to the participation of “other” actors, machinic or not, was clearly a hot topic in the 70s. Nicholas Negroponte opened the decade with a declaration that architects were “unnecessary and even detrimental” to the creation of the built environment [6]. In these early days of machine intelligence, Negroponte seemed to contend that being digital meant, ultimately, being nothing to the process of design – at least, not party to any blame for the built result. In Soft Architecture Machines, he articulated a potential future of environments that responded to everyone’s needs. He invoked “indigenous architecture” as precedent for his ability to now step away from the built product. He foresaw the computer as the overseer of any action necessary [7]. He argued, implicitly, that more participation meant less responsibility for him, the designer.
But what do we stand to measurably gain from these calls for greater participation in design? One (hopefully uncontroversial) answer is that it implies a greater potential for the unexpected. How else could Negroponte frame his abdication as something worthwhile? It was couched in a humble admission that he, one man, couldn’t know everything there is to know about good design [8]. Revolutionary. This admission is not good for architecture, though, if architecture is a profession charged with the invention and actualization of a determined result. The undelivered promise of design automation reminds us that machines are not magic; automated design is just participatory design where machine actors are extra participants. Abdication continues in the form of deference to whatever interface a given program offers. The question of control and responsibility is just muddied at an undefined cost.
And before we even get to the previous question: where are the participants? The men of the 70s laid out their games and, ultimately, few found them to be very fun [9]. The egalitarian future they promised us, free of the tyranny of the architect, never came to be. Today, we call our games “Revit Addins” and they are even less fun. For all the work that has gone into the project of design automation, I can’t help but feel that the lack of adoption signals that our attention has been improperly allocated. There are plenty of clever toys out there! But a focus on computational efficiency or novelty in the industry conflates freedom with financial concerns. What does this tool offer me that a cheap intern doesn’t [10]? It doesn’t seem to be enough to package our expertise into a collection of black boxes, set them on the table, and walk away.
Consider a more contemporary example: the design automation research coming out of WeWork in the past year. Office space planning, a well-constrained problem space when compared to “architecture in general,” became an obvious target for the machines. In the span of months, the team developed a sophisticated tool for packing desks based on room perimeters, released a paper on how their tool outperforms designers in several metrics [11], and then quickly realized that adoption within the company was poor because no one designs offices that way [12]. Heumann and Davis contend that their oversight was the lack of consideration for how designers interact and integrate with their algorithms. In the context of the story in this paper, though, I’d like to propose a parallel but alternative framing: failure of adoption is rooted in our failure to build trust with the user.
Say, like in this case, I’m introduced to a new collection of buttons to press that promise me a faster and better result. Devoid of certainty and control, interaction with this black box is an act of faith. I need to trust my tools, and that begins with trusting their creators. I find that hard to do when they resist the conviction to remain responsible. Or, as a lighter charge directly to Heumann and Davis: it’s difficult for me to proselytize your automated religion when its core tenets are undefined. I blame you for your choice to justify the tool with an “efficiency” measurement of the results, and I blame you for the lack of justification for the perimeter-based approach. Automation is collaboration with the author through the machine. We cannot continue to talk about our work as if it were the gift of our expertise delivered unto the designer.
Protocol Subsumes Participation
At the end of the day, I’m looking for someone to blame for two unfulfilled promises. Why, as the men of the 70s set out to accomplish, have we not yet distributed the contributions of the architect into some sort of collective action? And why, as intuitive as it seems, have increasingly sophisticated computational systems failed to enter this arena as reliable actors? In place of answering these questions, we seem to be ejecting responsibility.
Enter protocol, our weapon in the fight against the problems of trust and participation. In articulating an understanding of protocol, I take my lead from Alexander Galloway’s explanation in his book of the same name. Using the internet as a fulcrum, he tracks how recent decades have seen social organizations transition from decentralized networks to distributed ones. Specifically, he is interested in a societal shift “away from central bureaucracies and vertical hierarchies towards a broad network of autonomous social actors [13].” With so many more actors in play, orchestrating their interaction takes high priority. Protocol, then, offers this “territorializing structure and anarchical distribution [14].” The concept existed before the digital systems we now apply it to, but rapidly increasing complexity demands the control protocol offers.
Protocol responds to the problem of trust by marrying the egalitarian intentions of Price and Alexander with a means to still exert control over an unsupervised process; protocol offers someone to blame. Galloway asserts that protocol is “a technique for achieving voluntary regulation within a contingent environment [15].” In this paper, that contingent environment is the project of design automation; protocol offers a way to package the black box of design with voluntary interest. The architect’s scope expands. She must design, in tandem, both the elective instructions and the objective goals of design. In this way, advocating for protocological design is both a continued rejection of determined proposals as the object of design and a defense of the human designer’s continued existence. The responsibility of the designer shifts to outlining a class of potentials and possibilities.
We can understand Alexander’s pattern language as an incomplete protocol. He did understand his patterns as a “web of nature” where each could only exist “to the extent that [it] is supported by other patterns.” This is the distributed network that Galloway argues protocol thrives in. Alexander’s mistake, although noble in his pursuit of greater participation in design, was in his insistence that each pattern “imposes nothing on [the designer] [16]”. In stark contrast to the participatory project of the 70s, protocol is imposition. Alexander undercut the potential value of his system when he deferred this responsibility. As exhaustive and novel a study as his patterns are, if they produced a built architecture, it was often indistinguishable from traditional methods. Intentional or not, this calls into question the value proposition of the approach.
The protocol justifies collective action with consensus and introduces a new suite of problems as a result. Participation becomes simultaneously voluntary and regulated. You must choose to play my game. As the one who inaugurated this process and declared its goals, however, I remain a stakeholder. The charges to “design with my patterns” or to “design with my automated tool” are synonymous in the sense that both ask the user to “design like me.” The protocol removes any suggestive pretenses of otherwise. Barring your complete agreement to my terms, they are at least clearly defined enough that you know where we disagree. Automation becomes conflict, instead of simply aimless or misguided.
If we can marry an ethic of indeterminacy with a high degree of designer responsibility, architects may find a way to wrest intelligence from technology without having to choose between humans or machines. I believe, as a designer and technologist, that my future relationship with machine intelligence will be defined by this paradox. Implementations must be explicit and predictable, but results are not. If architecture is a deterministic practice, then we cannot call such a stochastic approach architecture. But if uncertainty is the disqualifier, where do we draw the line? I’m still not sure. But I’m just looking for someone to blame.
Recursive Remembrance
Intent
As explained in the first part, we understand protocol as a tool to explicitly declare the relationship between designer, machine, and otherwise as they design in concert. This part acts as an example and argument through implementation.
What follows is a protocol for the reconstruction of Aleppo, Syria. Specifically, we target the thoroughly destroyed and politically charged medieval city surrounding the castle and medina. The conflict stems from incompatible bottom-up desires and top-down intentions for novel reconstruction or static preservation. In this simulation, we embed ourselves as the architects that maintain and deliver a machinic tool for the creation of design instructions. Ultimately, final design decisions and construction are carried out by Syrians on-site. We simply situate the machine as the independently intelligent liaison between conflicting images for the future of Aleppo.
The Inaugural Participant
SCOPE -- The architect aims to initiate the protocol, at this stage in time, as divorced from the developing political context as allowable. There is, today, a real human need for homes, markets, mosques, and public space to be re-introduced. The architect only claims responsibility for articulating a way forward for reconstruction. We recognize competing attitudes around preservation and the cultural significance of specific elements in the city.
HISTORY -- Reconstruction cannot mean exact replication of the pre-war condition. At a minimum, the context has been permanently altered by recent events. Further, the architect believes that the human traumas of war cannot be healed without facing the memory of it. The choice is then in what must change and in what way.
PROMISE -- The architect will deliver instructions towards the physical reconstruction of Old Aleppo that integrate programmatic needs with an ambition to restore the medina to the commercial and cultural hub it was before the war.
The Machine Participant
ALGORITHM -- This protocol employs a Generative Adversarial Network (GAN) as the machine actor in the design of Aleppo’s reconstruction. The tool is a recent development in machine learning that pairs two neural networks, a generator that invents imagery and a discriminator that judges it against the training set; through their competition, the system can, unsupervised, develop a programmatic understanding of some large collection of data. The machine actor can consider multiple disparate criteria in parallel.
LIMITATIONS -- The machine actor is attractive for its ability to seemingly “invent” and justify imagery. This comes with one major caveat to the “generative” nature of its architecture: the network can only work with the data it is given. Results may be surprising, but they are, fundamentally, reconfigurations of existing information. In the context of preservation, however, this can be leveraged as more of a strength than a weakness.
TRAINING -- The machine actor will be trained on the imagery of Aleppo before and after the war. It will then negotiate the collision of these two data sets: its understanding of pre-war Aleppo and the alterations caused by ruinous ground. This is the argument for preservation. The site, as data, has been reconfigured; we are replacing one erroneous condition (ruins) with another (our results). To achieve some degree of normalization within the data, only four governing models of different cultural motifs will be developed. Specifically, the machine actor will utilize image sets of local domes, minarets, balconies, and arches (Fig. 2).
PROMISE -- The machine will synthesize its conflicting understandings of Aleppo into a single vision for its reconstruction.
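The adversarial mechanics the machine actor relies on can be sketched in miniature. What follows is a toy, not the project’s implementation: a real model for Aleppo’s motifs would be a deep convolutional GAN trained on photographs, while this NumPy sketch pits the simplest possible generator against the simplest possible discriminator over a one-dimensional stand-in distribution. Every number and name here is an illustrative assumption.

```python
import numpy as np

# Toy GAN: linear generator G(z) = a*z + b versus logistic discriminator
# D(x) = sigmoid(w*x + c). The "real" distribution N(4.0, 0.5) is a
# hypothetical stand-in for one cultural motif (e.g. arch proportions);
# all parameters are assumptions for demonstration only.

rng = np.random.default_rng(42)

def sigmoid(x):
    # Clip logits to avoid overflow in exp.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

def sample_real(n):
    # Stand-in for the photographic training set.
    return rng.normal(4.0, 0.5, n)

a, b = 1.0, 0.0    # generator parameters
w, c = 0.0, 0.0    # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    real = sample_real(batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w -= lr * np.mean(-(1.0 - d_real) * real + d_fake * fake)
    c -= lr * np.mean(-(1.0 - d_real) + d_fake)

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    grad_out = -(1.0 - d_fake) * w   # dLoss/dG(z)
    a -= lr * np.mean(grad_out * z)
    b -= lr * np.mean(grad_out)

# After training, the generator's samples should approximate the motif.
generated = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean {generated.mean():.2f}, target mean 4.0")
```

Even at this scale, the structure the protocol depends on is visible: the generator never sees the real data directly; it only learns what a discriminator, itself trained on the participants’ submissions, will accept.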
The Participant Proper
CONTROL -- In a system so heavily determined by source data, this protocol defers its sourcing to the local participant. Citizens of Aleppo, the stakeholders most heavily impacted by both the conflict and reconstruction, are asked to submit their own photographs of the city at any point in time. In this way, documentation is like voting.
INTERPRETATION -- The participant is asked to make a final determination for local construction in their interpretation of the results of the machine actor. The architect provides instruction but invokes “vernacular” architecture no further than our own act of faith. That black box belongs to the Syrian.
Endnotes
- 1 - Banham, Reyner. “A Black Box.” A Critic Writes. Berkeley: University of California Press, 1996. 292-299.
- 2 - Ringley, Brian. https://www.instagram.com/p/Bx5O3w2pRxA/
- 3 - Stevermer, Tyler. MIT. https://dspace.mit.edu/handle/1721.1/97274
- 4 - Frazer, John. An Evolutionary Architecture. London: Architectural Association, 1995. 40-41.
- 5 - Alexander, Christopher, et al. A Pattern Language: Towns, Buildings, Construction. Oxford University Press, 1977. xv.
- 6 - Negroponte, Nicholas. Soft Architecture Machines. Cambridge: MIT Press, 1974. 163.
- 7 - Ibid., 168-169.
- 8 - This sentiment echoed by Alexander in his introduction to Notes on the Synthesis of Form.
- 9 - Ratti, Carlo and Matthew Claudel. Open Source Architecture. Thames and Hudson, 2015. 49-51.
- 10 - Davis, Daniel. “Can Algorithms Design Buildings?” Architect Magazine. 24 June 2019. Web. https://www.architectmagazine.com/technology/can-algorithms-design-buildings_o
- 11 - Anderson, C., C. Bailey, A. Heumann, and D. Davis. “Augmented Space Planning: Using Procedural Generation to Automate Desk Layouts.” International Journal of Architectural Computing 16, no. 2 (2018): 164-177.
- 12 - Heumann, A., and D. Davis. “Humanizing Architectural Automation: A Case Study in Office Layouts.” In Impact: Design With All Senses (DMSB 2019), edited by C. Gengnagel, O. Baverel, J. Burry, M. Ramsgaard Thomsen, and S. Weinzierl. Cham: Springer, 2020.
- 13 - Galloway, Alexander R. Protocol: How Control Exists After Decentralization. Cambridge: MIT Press, 2004. 32-33.
- 14 - Ibid., 64.
- 15 - Ibid., 7. Galloway further argues that “the limits of a protocological system and the limits of possibility within that system are synonymous.”
- 16 - Alexander, Christopher, et al. A Pattern Language: Towns, Buildings, Construction. Oxford University Press, 1977. xii.
About the Authors
Charles Driesler is a self-described “robot who defected to the human side.” He is obsessed with the political impact that automation technologies will have on human lives, the nature or necessity of work, and also sometimes how this all will affect architecture.
Charles teaches Advanced Computation for the City College of New York’s M.Arch program. He also works on design automation initiatives at WeWork where he puts his previously published office space planning theories to practice. A massive proponent of open-source software, he treasures his growing list of public contributions to projects like rhino3dm.
Ahmad Tabbakh is an architectural designer based in Brooklyn, New York. Prior to earning his B.Arch degree from Pratt Institute, Ahmad completed an award-winning thesis project on the use of machine learning in the reconstruction of Aleppo, Syria. Sited in the city that was home to both his childhood and his first architectural studies, the project was a significant bookend to an undergraduate career that has spanned continents, states, and statelessness.
Ahmad specializes in digital manufacturing and the use of robotics in architectural design. Since graduation, he has been hired to help run the research and robotics program at Mancini Duffy architects.
"Urban Interactions": Bi-City Biennale of Urbanism\Architecture (Shenzhen) - 8th edition. Shenzhen, China
http://www.szhkbiennale.org.cn/
Opening in December, 2019 in Shenzhen, China, "Urban Interactions" is the 8th edition of the Bi-City Biennale of Urbanism\Architecture (UABB). The exhibition consists of two sections, namely “Eyes of the City” and “Ascending City”, which will explore the evolving relationship between urban space and technological innovation from different perspectives. The “Eyes of the City" section features MIT professor and architect Carlo Ratti as Chief Curator and Politecnico di Torino-South China University of Technology as Academic Curator. The "Ascending City" section features Chinese academician Meng Jianmin and Italian art critic Fabio Cavallucci as Chief Curators.
"Eyes of The City" section
Chief Curator: Carlo Ratti.
Academic Curator: South China-Torino Lab (Politecnico di Torino - Michele Bonino; South China University of Technology - Sun Yimin)
Executive Curators: Daniele Belleri [CRA], Edoardo Bruno, Xu Haohao
Curator of the GBA Academy: Politecnico di Milano (Adalberto Del Bo)
"Ascending City" section
Chief Curators: Meng Jianmin, Fabio Cavallucci
Co-Curator: Science and Human Imagination Center of Southern University of Science and Technology (Wu Yan)
Executive Curators: Chen Qiufan, Manuela Lietti, Wang Kuan, Zhang Li